
    Motor interference in interactive contexts

    Action observation and execution share overlapping neural substrates, so that simultaneous activation by observation and execution modulates motor performance. Previous literature on simple prehension tasks has revealed that this motor influence can be twofold: facilitation when observed and performed actions are congruent, and interference when they are incongruent. However, little is known about the specific modulations of motor performance in complex forms of interaction. Can the very same observed movement lead either to interference or to facilitation effects on a temporally overlapping congruent executed action, depending on the context? To answer this question, participants were asked to perform a reach-to-grasp movement adopting a precision grip (PG) while: (i) observing a fixation cross, (ii) observing an actor performing a PG with interactive purposes, or (iii) observing an actor performing a PG without interactive purposes. In the interactive condition, the actor was shown trying to pour sugar into a large cup located out of her reach but close to the participant watching the video, thus eliciting a complementary whole-hand grasp in response. Notably, fine-grained kinematic analysis for this condition revealed a specific delay in the grasping and reaching components and an increased trajectory deviation, despite the congruency of the observed and executed movements. Moreover, early peaks of trajectory deviation suggest that socially relevant stimuli are acknowledged by the motor system very early. These data suggest that interactive contexts can prompt a rapid modulation of stimulus–response compatibility effects.

    Feature space analysis for human activity recognition in smart environments

    Activity classification from smart-environment data is typically done with ad hoc solutions customised to the particular dataset at hand. In this work we introduce a general-purpose collection of features for recognising human activities across datasets of different type, size and nature. The first experimental test of our feature collection achieves state-of-the-art results on well-known datasets, and we provide a feature-importance analysis to compare the potential relevance of the features for activity classification across datasets.
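The abstract does not list the features themselves; as a rough illustration of the kind of dataset-agnostic descriptors such a collection might contain, the sketch below computes a few generic features from a window of smart-environment sensor events. The feature names and the event format are assumptions for illustration, not the paper's actual design:

```python
from collections import Counter

def window_features(events, sensor_ids):
    """Generic features from one time window of smart-environment events.

    events: list of (timestamp_seconds, sensor_id) tuples.
    sensor_ids: full list of known sensors, so that feature vectors
    align across windows and across datasets.
    """
    times = [t for t, _ in events]
    counts = Counter(s for _, s in events)
    feats = {
        "n_events": len(events),
        "duration": (max(times) - min(times)) if events else 0.0,
        "n_active_sensors": len(counts),
        # hour of day of the window start, a common context feature
        "hour_of_day": (times[0] % 86400) / 3600 if events else 0.0,
    }
    # one count feature per known sensor (zero when the sensor was silent)
    for s in sensor_ids:
        feats[f"count_{s}"] = counts.get(s, 0)
    return feats
```

A vector built this way can be fed to any standard classifier, since it does not depend on the semantics of a particular deployment.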

    Social Motor Priming: when offline interference facilitates motor execution

    Many daily activities involve synchronizing with other people’s actions. Previous literature has revealed that performance slows whenever the action to be carried out differs from the one observed (i.e., visuomotor interference). However, action execution can be facilitated by observing a different action if that action calls for an interactive gesture (i.e., social motor priming). The aim of this study was to investigate the costs and benefits of spontaneously processing a social response and then executing the same or a different action. Participants performed two different types of grip, which could be congruent or not with the socially appropriate response and with the observed action. Specifically, participants performed a precision grip (PG; thumb–index finger opposition) or a whole-hand grasp (WHG; fingers–palm opposition) after observing videos of an actor performing a PG while addressing them (interactive condition) or not (non-interactive condition). Crucially, in the interactive condition the most appropriate response was a WHG, but in 50 percent of trials participants were asked to perform a PG. This procedure allowed us to measure both the facilitatory effect of performing an action appropriate to the social context (WHG) yet different from the observed one (PG), and the cost of inhibiting it. These effects were measured by means of 3-D kinematic analysis of movement. Results show that, in terms of reaction time and movement time, the interactive request facilitated (i.e., speeded) the socially appropriate action (WHG) while interfering with (i.e., delaying) a different action (PG), although the observed actions were always PGs. This interference also manifested as an increase in maximum grip aperture, which seemingly reflects the concurrent representation of the socially appropriate response. Overall, these findings extend previous research by revealing that physically incongruent action representations can be integrated into a single action plan even during an offline task and without any training.

    Adaptive saccade controller inspired by the primates' cerebellum

    Saccades are fast eye movements that allow humans and robots to bring a visual target to the center of the visual field. Saccades are open loop with respect to the vision system, so their execution requires precise knowledge of the internal model of the oculomotor system. In this work, we modeled saccade control, taking inspiration from the recurrent loops between the cerebellum and the brainstem. In this model, the brainstem acts as a fixed inverse model of the oculomotor system, while the cerebellum acts as an adaptive element that learns the internal model of the oculomotor system. The adaptive filter is implemented using a state-of-the-art neural network called I-SSGPR. The proposed approach, namely the recurrent architecture, was validated through experiments performed both in simulation and on an anthropomorphic robotic head. Moreover, we compared the recurrent architecture with another model of the cerebellum, feedback error learning. The results show that the recurrent architecture outperforms feedback error learning in terms of accuracy and insensitivity to the choice of the feedback controller.
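As a toy illustration of the fixed-inverse-model-plus-adaptive-element scheme described above, the sketch below pairs a deliberately imperfect brainstem gain with a simple online LMS learner standing in for the I-SSGPR adaptive filter. The scalar plant, gains, and learning rate are all invented for illustration and are far simpler than the paper's model:

```python
class RecurrentSaccadeController:
    """Toy brainstem-cerebellum saccade controller (scalar version)."""

    def __init__(self, brainstem_gain=0.8, lr=0.05):
        self.brainstem_gain = brainstem_gain  # fixed, approximate inverse model
        self.w = 0.0                          # cerebellar correction weight
        self.lr = lr

    def command(self, target):
        # motor command = fixed inverse model + learned cerebellar correction
        return self.brainstem_gain * target + self.w * target

    def adapt(self, target, achieved):
        # the error between desired and achieved saccade amplitude
        # drives LMS-style learning of the correction term
        err = target - achieved
        self.w += self.lr * err * target
```

With a unit-gain plant, the perfect inverse model would have gain 1.0; since the fixed brainstem gain is 0.8, the adaptive element converges to a correction of about 0.2, after which saccades land on target.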

    How deeply do we include robotic agents in the self?

    In human–human interactions, a consciously perceived high degree of self–other overlap is associated with greater integration of the other person's actions into one's own cognitive representations. Here, we report data suggesting that this pattern does not hold for human–robot interactions. Participants performed a social Simon task with a robot and afterwards indicated the degree of self–other overlap using the Inclusion of the Other in the Self (IOS) scale. We found no overall correlation between the social Simon effect (an indirect measure of self–other overlap) and the IOS score (a direct measure of self–other overlap); for female participants we even observed a negative correlation. Our findings suggest that conscious and unconscious evaluations of a robot may yield different results, and hence point to the importance of carefully choosing a measure for quantifying the quality of human–robot interactions.

    Unsupervised grounding of textual descriptions of object features and actions in video

    We propose a novel method for learning visual concepts and their correspondence to the words of a natural language. The concepts and correspondences are jointly inferred from video clips depicting simple actions involving multiple objects, together with corresponding natural language commands that would elicit these actions. Individual objects are first detected, together with quantitative measurements of their colour, shape, location and motion. Visual concepts emerge from the co-occurrence of regions within a measurement space and words of the language. The method is evaluated on a set of videos generated automatically using computer graphics from a database of initial and goal configurations of objects. Each video is annotated with multiple commands in natural language obtained from human annotators via crowdsourcing.
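A minimal sketch of the co-occurrence idea: if each clip yields a set of discrete visual measurements and a set of command words, word-concept correspondences can emerge from simple conditional co-occurrence statistics. The data format and the normalisation below are assumptions for illustration, not the paper's actual joint inference procedure:

```python
from collections import defaultdict

def cooccurrence_grounding(pairs):
    """Ground words in quantised visual features by co-occurrence counting.

    pairs: list of (words, features), where words is a set of tokens from
    a command and features is a set of discrete visual measurements
    observed in the corresponding clip.
    Returns, for each word, the table P(feature | word).
    """
    word_count = defaultdict(int)
    joint = defaultdict(lambda: defaultdict(int))
    for words, feats in pairs:
        for w in words:
            word_count[w] += 1
            for f in feats:
                joint[w][f] += 1
    return {w: {f: c / word_count[w] for f, c in fs.items()}
            for w, fs in joint.items()}
```

Words that reliably co-occur with one region of the measurement space ("red" with red colour measurements) end up with peaked conditional tables, while function words spread their mass across many features.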

    Biologically-Inspired 3D Grasp Synthesis Based on Visual Exploration

    Object grasping is a typical human ability that is widely studied from both a biological and an engineering point of view. This paper presents an approach to grasp synthesis inspired by the human neurophysiology of action-oriented vision. Our grasp synthesis method is built upon an architecture which, taking into account the differences between robotic and biological systems, adapts brain models to the peculiarities of robotic setups. The modularity of the architecture allows for scalability and the integration of complex robotic tasks. Grasp synthesis is integrated with the extraction of a 3D object description, so that the visual analysis of the object is actively driven by the needs of the grasp synthesis: visual reconstruction is performed incrementally and selectively on the regions of the object considered most interesting for grasping.